Latest Microsoft Dynamics 365 Blogs | CloudFronts

From Default to Dynamic: Transforming Dynamics CRM Subgrids with Custom HTML for a Netherlands-Based Sustainability Certification Non-Profit

Summary

A Netherlands-based sustainability certification non-profit faced a key limitation in Dynamics CRM: the default subgrid had no way to filter related lookup values — meaning all versions and levels appeared for every certification standard, regardless of relevance. CloudFronts replaced the default subgrid with a custom HTML Web Resource that renders each certification standard as an interactive card, with its own pre-filtered version and level dropdowns. Users selecting C2C Certified® Full Scope now see only versions and levels that belong to Full Scope, not a cluttered list of every record in the related table. Beyond fixing the filtering gap, the solution transformed the CRM form experience from a flat, generic grid into a clean, modern card-based interface, significantly improving usability for both applicants and assessors.

Table of Contents

1. Customer Scenario
2. The Real Problem — Unfiltered Lookups in Subgrids
3. Solution Overview
4. Key Features of the Custom Web Resource
5. How It Works — Technical Implementation
6. End-to-End Walkthrough
7. Architecture & Design Decisions
8. Business Impact
9. FAQs
10. Conclusion

Customer Scenario

A Netherlands-based non-profit organization uses Dynamics CRM to manage the full certification application lifecycle, from initial scoping through assessment and final issuance. As part of every certification application, users must define a Certification Scope: selecting which Cradle to Cradle standards they want assessed, choosing the correct version of each standard, and setting a target certification level. The available standards include:

- Full Scope
- Material Health
- Circularity

Each standard has its own set of applicable versions and certification levels stored in related Dataverse tables. The challenge was making sure users could select and configure each standard correctly, without the CRM form showing them irrelevant data from other standards.
The Real Problem — Unfiltered Lookups in Subgrids

The original design used a default CRM subgrid to list the certification scope lines. Each row in the subgrid had lookup fields pointing to related Dataverse tables: one for Standard Version, one for Certification Level. The problem was straightforward but significant: CRM subgrid lookup fields have no native mechanism to filter their values based on another field in the same row.

This meant that when a user opened the Version lookup on a Full Scope row, they saw every version across every standard — Full Scope versions, Material Health versions, Circularity versions, all mixed together in a single unfiltered list. The same issue applied to Certification Levels. There was no built-in way to say: “this row is for Full Scope — only show me Full Scope versions.”

Key Pain Points

- Wrong version selected by mistake: With all versions in one unfiltered list, users had to manually identify and pick the correct one — easy to get wrong, especially for new staff unfamiliar with which version belongs to which standard.
- Cluttered, confusing lookup lists: A lookup showing 20+ mixed records when the user only needs to choose from 3–4 relevant options is a frustrating experience and a source of data quality issues.
- No visual structure or grouping: The default subgrid renders as a flat table. There is no way to visually distinguish one standard from another or understand the overall scope at a glance.
- A form that did not match the product’s quality: The client wanted their CRM environment to feel professional and polished — a plain, out-of-the-box grid did not meet that expectation.

The subgrid was not broken — it was simply the wrong tool for this job. What was needed was a control that understood the relationship between a standard and its versions, and filtered accordingly.

Solution Overview

CloudFronts replaced the default subgrid entirely with a custom HTML Web Resource embedded directly on the CRM application form.
The web resource reads a single JSON field on the record which carries each standard along with its own pre-scoped list of versions and levels. The core idea: each certification standard gets its own card → each card shows only the versions and levels that belong to that standard → no more mixed, unfiltered lookup lists.

What This Achieves

For Applicants and Assessors:

- Select one or more certification standards using clear, visual checkbox cards
- See only the versions relevant to the selected standard — nothing from other standards
- Choose a target level from a correctly filtered, correctly sorted dropdown
- Get instant visual feedback on which standards are active in the scope

For the CRM Platform Team:

- No subgrid lookup filtering workarounds, form-level JavaScript hacks, or plugin-based filtering required
- All filtering is handled naturally by the JSON data structure: each standard row already carries only its own versions and levels
- A significantly better-looking form that reflects the quality of the certification program itself
- Scope configuration can be extended simply by updating the JSON, with no schema changes needed

Key Features of the Custom Web Resource

The web resource was designed with two clear goals: solve the filtering problem correctly, and make the form experience noticeably better. Here is how each feature serves those goals.

1. Card-Per-Standard Layout with Checkbox Selection

Each certification standard is rendered as its own self-contained card — with a title, a short description of what that standard covers, and a checkbox. This immediately solves the visual grouping problem that a flat subgrid cannot address.
- Clicking a card (or its checkbox) marks that standard as In Scope
- The selected card highlights with a blue left-border accent and a soft background tint — making it immediately clear which standards are active
- Deselected cards remain compact and unobtrusive, keeping the form clean

For assessors reviewing multiple applications, being able to scan the full scope at a glance — without opening related records or reading through a grid — is a meaningful time saving.

2. Filtered Version Dropdown — Per Standard

This is the feature that directly solves the original problem. When a card is selected and expands, the Standard Version dropdown is populated exclusively from the versions array within that standard’s JSON row.

- A user working on a Full Scope card sees only Full Scope versions
- A user working … Continue reading From Default to Dynamic: Transforming Dynamics CRM Subgrids with Custom HTML for a Netherlands-Based Sustainability Certification Non-Profit
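The per-standard filtering idea can be sketched in a few lines. The shape of the JSON payload below is illustrative (the article does not publish the real schema): each standard row carries only its own versions and levels, so populating a dropdown never requires cross-standard filtering logic.

```python
import json

# Hypothetical shape of the JSON field described above; names and
# option values are illustrative, not the client's real configuration.
scope_config = json.loads("""
{
  "standards": [
    {"name": "Full Scope",
     "versions": ["3.1", "4.0"],
     "levels": ["Bronze", "Silver", "Gold", "Platinum"]},
    {"name": "Material Health",
     "versions": ["3.1"],
     "levels": ["Silver", "Gold"]}
  ]
}
""")

def options_for(standard_name):
    """Return the pre-scoped dropdown options for one standard's card."""
    for row in scope_config["standards"]:
        if row["name"] == standard_name:
            return row["versions"], row["levels"]
    raise KeyError(standard_name)

versions, levels = options_for("Full Scope")
print(versions)  # only Full Scope versions appear, never Material Health ones
```

Because each row is already self-contained, the web resource only has to read and render — there is no lookup-filtering query to maintain.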


How We Built & Deployed a Mobile-Based Canvas App for Unified Time, Expense (with Receipts) & Material Submission with Project-Based Approvals for a US Cybersecurity Firm

Summary

A US-based oil & gas cybersecurity firm implemented a mobile-first Canvas App integrated with Dynamics 365 Project Operations to unify time, expense, and material submission, tracking, and approval. The solution enabled project-specific approval workflows where only assigned approvers could validate submitted records. CloudFronts introduced a dual-mode interface (Day Mode and Week Mode) to improve usability for both field engineers and managers. Submission and approval cycle time dropped from hours or days to near real-time visibility.

Table of Contents

1. Customer Scenario
2. Solution Overview
3. Key UX Features
4. Functional Implementation
5. Solution Walkthrough
6. Architecture & Integration Approach
7. Business Impact
8. FAQs
9. Conclusion

Customer Scenario

A Texas-based cybersecurity firm specializing in operational technology (OT) security for oil rigs manages multiple concurrent field projects using Dynamics 365 Project Operations. Employees and resources were responsible for logging:

- Time entries
- Expense entries (travel, accommodation, airfare, etc.)
- Material usage logs (equipment, parts, consumables, etc.)

However, the system was not designed for mobile-first usage, and processes were fragmented across multiple interfaces.

Key Challenges

- Field engineers and other resources could not efficiently submit entries from mobile devices
- Time, expense, and material tracking existed in separate workflows
- Approval processes had to be restricted to project-specific stakeholders
- Project managers lacked real-time visibility into resource usage
- Delays in submission caused downstream billing and reporting issues

Project tracking accuracy suffered, and reporting delays directly affected client communication and billing cycles.

Solution Overview

CloudFronts designed and deployed a unified mobile application using Power Apps (Canvas Apps) integrated with Dynamics 365 Project Operations.
Objective: One app → All submissions → Controlled approvals → Real-time visibility

What the App Enables

For Field Users:

- Submit time entries (daily or weekly)
- Create expense entries with receipt validation
- Log material consumption against projects
- Track submission status instantly

For Project Approvers:

- View only entries related to assigned projects
- Approve or reject submissions directly from mobile
- Maintain audit-ready approval workflows

Key UX Features

The application is designed with a strong focus on usability for both resources and project approvers, ensuring a seamless mobile experience across submission and approval workflows.

1. Day Mode / Week Mode Toggle

The app provides a flexible entry experience through a dual-mode interface:

- Day Mode: Enables detailed entry for a single day, ideal for precise logging and corrections.
- Week Mode: Allows bulk entry across multiple days, reducing effort for repetitive data entry.

This flexibility significantly improves usability across different working styles and scenarios.

2. Calendar-Based Swipe Navigation

The application introduces a Dynamics-style calendar navigation with swipe support, allowing users to:

- Traverse across multiple days or weeks effortlessly
- View and manage multiple submission records in sequence
- Navigate between historical and current entries with minimal effort

This mobile-first interaction design reduces friction in high-frequency data entry scenarios.

3. Unified Submission & Approval Experience

The UI/UX is intentionally designed to mirror the complete lifecycle of a record, ensuring consistency between submission and approval stages. Each record follows a structured lifecycle aligned with Dynamics 365 stages:

- Submitted
- Pending
- Approved
- Rejected
- Recall Requested
- Recall Request Approved
- Recall Request Rejected

The interface dynamically adapts based on the current stage:

- Action buttons (Approve, Reject, Recall, etc.) are conditionally visible
- Status indicators are clearly displayed
- Users experience the same structured flow from creation to closure

This ensures clarity, reduces errors, and improves user confidence in the system.

4. Dynamic Action-Based UI (Smart Button Behavior)

The app intelligently modifies UI controls based on record state:

- Submit button appears only for draft entries
- Approve/Reject buttons are visible only to project approvers
- Recall option is available only after submission
- Post-approval states restrict further edits

This enforces role-based and state-based control, preventing invalid actions and maintaining process integrity.

5. Conditional Receipt Upload for Expense Entries

Expense submission logic is enhanced with category-driven validation:

- Mandatory: Airline tickets, OT hardware purchases
- Optional: Meals, local travel

This balances compliance requirements with user convenience, avoiding unnecessary friction.

6. On-Demand Data Refresh

Users can manually refresh data within the app to:

- Fetch the latest submission and approval statuses
- Sync newly created or updated records
- Ensure real-time visibility without relying solely on background refresh

This is especially useful in environments with intermittent connectivity.

7. Mobile-First Interaction Design

- Touch-friendly controls
- Swipe navigation
- Lightweight screens for faster performance
- Minimal navigation depth

This ensures field engineers working in remote or on-site environments can operate efficiently.

Functional Implementation

This section outlines how the solution was implemented within Dynamics 365 Project Operations and the Power Platform to enable end-to-end submission and approval management.

1. Unified Data Model in Dataverse

All three entry types — Time, Expense, and Material — are structured within Dataverse and linked to:

- Project
- Resource (User)
- Approval records
- Supporting documents (for expenses)

Each submission creates a corresponding record with a defined lifecycle stage, ensuring consistency across all entry types.

2. Submission Logic from Canvas App

Each submission type follows a structured flow:

1. User selects project and entry type (Time / Expense / Material)
2. Required fields are validated based on entry type
3. Conditional logic enforces the receipt requirement (for specific expense categories) and mandatory fields (based on business rules)
4. Record is created in Dataverse
5. Submission triggers the backend approval workflow

This ensures that all records entering the system are complete, validated, and ready for approval processing.

3. Approval Record Creation & Routing

Upon submission:

- A corresponding approval record is automatically created
- The system identifies project-specific approvers

Key behavior:

- Only assigned project approvers can view and act on records
- Approval actions update the main record status

4. Record Lifecycle Management (Status-Driven System)

Lifecycle: Draft → Submitted → Pending → Approved / Rejected → Recall Flow

- Users submit records → record moves to Submitted
- Approvers review → Approved or Rejected
- Users request recall → Recall Requested
- Approvers respond → Recall Approved or Rejected

Controlled through:

- Power Apps UI logic
- MS Bound Actions for submission and approval handling
- Dataverse status fields

5. Expense Receipt Handling (Integrated from Previous Solution)

- Receipt upload enforced conditionally
- Files stored as Notes (Annotations) in Dataverse
- Linked to expense records

This eliminates manual document handling and ensures compliance.
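The status-driven button behavior described above can be modeled as a small state machine. The stage names below come from the article; the transition map and role rules are illustrative assumptions, not the firm's exact configuration (in the Canvas App itself this logic lives in Power Fx visibility expressions).

```python
# Which actions each lifecycle stage exposes (illustrative transition map).
TRANSITIONS = {
    "Draft": {"Submit"},
    "Submitted": {"Recall"},
    "Pending": {"Approve", "Reject"},
    "Recall Requested": {"Approve Recall", "Reject Recall"},
}

# Actions restricted to assigned project approvers.
APPROVER_ONLY = {"Approve", "Reject", "Approve Recall", "Reject Recall"}

def visible_actions(stage, is_approver):
    """Actions a user may see for a record, based on stage and role."""
    actions = TRANSITIONS.get(stage, set())  # terminal stages expose nothing
    if not is_approver:
        actions = actions - APPROVER_ONLY
    return sorted(actions)

print(visible_actions("Draft", is_approver=False))    # ['Submit']
print(visible_actions("Pending", is_approver=False))  # []
print(visible_actions("Pending", is_approver=True))   # ['Approve', 'Reject']
```

Centralizing the stage-to-action mapping in one structure is what keeps the submission and approval screens consistent: both render from the same lifecycle definition.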
Solution Walkthrough

The following walkthrough … Continue reading How We Built & Deployed a Mobile-Based Canvas App for Unified Time, Expense (with Receipts) & Material Submission with Project-Based Approvals for a US Cybersecurity Firm


Building a Reliable Bronze–Silver–Gold Data Pipeline in Databricks for Enterprise Reporting

Summary

Modern analytics platforms require structured data pipelines that ensure reliability, consistency, and governance across reporting systems. Traditional ETL approaches often struggle to scale as data volume and complexity increase. This blog explains how the Bronze–Silver–Gold (Medallion) architecture in Databricks provides a scalable and reliable framework for organizing data pipelines. It highlights how each layer serves a specific purpose, enabling better data quality, governance, and seamless integration with reporting tools such as Power BI.

The Real Problem: Reporting Pipelines Become Fragile Over Time

In many organizations, reporting pipelines become fragile as they grow, which leads to unreliable reporting and increased maintenance effort.

What Is the Bronze–Silver–Gold Architecture?

The Medallion architecture organizes data into three layers, each with a clear responsibility:

- Bronze Layer: raw data ingestion.
- Silver Layer: cleaned and standardized data.
- Gold Layer: business-ready, reporting-optimized data.

Bronze Layer: Raw Data Ingestion

Bronze ingests data in its raw form and acts as the system of record.

Silver Layer: Data Standardization

Silver cleans and standardizes the data, creating reusable datasets across reporting use cases.

Gold Layer: Reporting-Ready Data

Gold holds business-ready, reporting-optimized data, and Gold tables are consumed directly by reporting tools.

Why This Architecture Works

1. Separation of Concerns: Each layer has a defined role, reducing complexity.
2. Improved Data Quality: Data is progressively refined from raw to curated.
3. Better Performance: Reporting queries run on optimized Gold tables.
4. Governance with Unity Catalog: Access can be controlled at each layer.

Common Implementation Mistakes

Deviating from this layered discipline leads to long-term instability.

Business Impact

To conclude, the Bronze–Silver–Gold architecture provides a strong foundation for building scalable and reliable data pipelines in Databricks.
When combined with proper governance and disciplined design, it enables organizations to deliver consistent, high-quality data for analytics and decision-making.

We hope you found this article useful. If you would like to explore how a well-designed data pipeline can improve your reporting and analytics, please contact us at transform@cloudfronts.com.
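The layered refinement described above can be made concrete with a small sketch. In Databricks this would be Delta tables and Spark transformations; plain Python dicts are used here only to illustrate each layer's responsibility, and the record shapes are invented for the example.

```python
# Bronze: raw ingestion -- duplicates and nulls land as-is (system of record).
bronze = [
    {"order_id": 1, "region": "EMEA ", "amount": "100.0"},
    {"order_id": 1, "region": "EMEA ", "amount": "100.0"},   # duplicate row
    {"order_id": 2, "region": None,    "amount": "250.5"},
]

def to_silver(rows):
    """Clean and standardize: dedupe, trim, type-cast, default missing values."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({
            "order_id": r["order_id"],
            "region": (r["region"] or "UNKNOWN").strip(),
            "amount": float(r["amount"]),
        })
    return out

def to_gold(rows):
    """Aggregate into a reporting-ready shape (revenue per region)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'EMEA': 100.0, 'UNKNOWN': 250.5}
```

Note how each function owns exactly one layer's concern: Silver never aggregates, and Gold never cleans. That separation is what keeps reporting queries fast and the pipeline maintainable.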


Stop Hard-Coding Recipients: Streamlining Email Automation with Dataverse and Power Automate for a U.S.-Based Window and Door Manufacturer

Summary

A window and door manufacturing company based in the United States, specializing in energy-efficient fenestration products, eliminated brittle hard-coded email recipients from their sales automation by adopting a Dataverse-driven approach in Power Automate. CloudFronts implemented a dynamic recipient resolution pattern using coalesce and createArray expressions, pulling the To and CC parties directly from opportunity record lookups in Microsoft Dynamics 365 CRM. The solution handles null lookups gracefully, scales as team structures change, and requires zero flow edits when personnel or roles shift.

Business impact: Reduced flow maintenance overhead, eliminated misdirected emails caused by stale hard-coded addresses, and established a reusable pattern applicable across multiple automation scenarios.

About the Customer

The customer is a U.S.-based manufacturer of custom steel windows and doors, serving commercial, residential, and architectural projects. Established in the mid-1980s, the company specializes in high-performance, energy-efficient fenestration systems designed for both modern and heritage applications. They rely on Microsoft Dynamics 365 CRM to manage their sales pipeline, opportunity tracking, and customer communications across a distributed sales network. Their sales process involves multiple stakeholders per opportunity, including an opportunity owner, primary customer contact, forwarding representative, and regional sales representative, all of whom may need to be included in outbound communications at different stages of the deal lifecycle.

The Challenge

When the organization first automated opportunity-related emails through Power Automate, recipient addresses were defined statically inside the flow. A specific mailbox was hard-coded as the CC address, and To recipients were manually entered per scenario.
This approach worked initially but quickly became a source of ongoing problems:

- Stale recipients: When team members changed roles or left the organization, flows continued sending emails to incorrect or inactive addresses, requiring a developer to open the flow and update it manually every time.
- No relationship to CRM data: The recipient list in the flow had no connection to who was actually assigned to the opportunity in Dynamics 365 CRM. The two could easily fall out of sync.
- Scalability and maintenance burden: As the number of automated flows grew, so did the number of places where email addresses were hard-coded. A single personnel change could require updates across multiple flows, increasing both effort and the risk of missing one.
- Inability to handle variable stakeholders: Not every opportunity has the same set of involved parties. Some have a forwarding representative, others do not. Some have a dedicated sales representative assigned, while others rely only on the owner. A static recipient list cannot handle this variability.

The organization needed a recipient model that was driven entirely by what was recorded in CRM, not by what a developer had typed into a flow months earlier.

The Solution

CloudFronts redesigned the email automation to resolve all recipients dynamically at runtime, using lookup field values from the opportunity record in Dataverse. No email addresses are stored in the flow itself.

Technologies Used

- Microsoft Dynamics 365 CRM: source of opportunity data, ownership, and stakeholder relationships
- Power Automate: orchestration layer for the email automation
- Dataverse connector: real-time retrieval of the opportunity record and related lookup fields
- Email activity (CRM): target entity for structured email creation with party list support

What CloudFronts Configured

The flow fetches the opportunity record from Dataverse as its first action after the trigger.
From that single record, four lookup fields are evaluated: the record owner (_ownerid_value), the opportunity contact (_cf_opportunitycontact_value), a forwarding sales representative (_cf_forwardingtosalesrep_value), and the primary sales representative (_cf_salesrep_value). Each lookup is conditionally included in the recipient array only if it is not null. If a lookup field has no value on a given opportunity, it is excluded entirely: the flow does not error, and no placeholder address fills the gap.

The recipient array is constructed using a single coalesce + createArray expression, producing a clean party list that is passed directly into the email activity creation step. The participationtypemask value distinguishes the To recipient (mask 1, the owner via systemusers) from CC recipients (mask 2, contacts).

Power Automate Flow Walkthrough

The diagram above illustrates the end-to-end structure of the flow. Below is a breakdown of each stage.

Step 1: Trigger

The flow is triggered by a CRM event such as an opportunity stage change, a manual button, or a scheduled recurrence.

Step 2: Get opportunity record

A Dataverse action retrieves the full opportunity record, including all lookup fields.
Step 3: Build the recipients array

This is the core of the solution:

```
coalesce(
  createArray(
    if(
      not(equals(outputs('Get_Opportunity_Record')?['body/_ownerid_value'], null)),
      json(concat(
        '{"participationtypemask": 1, "partyid@odata.bind": "systemusers(',
        outputs('Get_Opportunity_Record')?['body/_ownerid_value'],
        ')"}'
      )),
      null
    ),
    if(
      not(equals(outputs('Get_Opportunity_Record')?['body/_cf_opportunitycontact_value'], null)),
      json(concat(
        '{"participationtypemask": 2, "partyid@odata.bind": "contacts(',
        outputs('Get_Opportunity_Record')?['body/_cf_opportunitycontact_value'],
        ')"}'
      )),
      null
    ),
    if(
      not(equals(outputs('Get_Opportunity_Record')?['body/_cf_forwardingtosalesrep_value'], null)),
      json(concat(
        '{"participationtypemask": 2, "partyid@odata.bind": "contacts(',
        outputs('Get_Opportunity_Record')?['body/_cf_forwardingtosalesrep_value'],
        ')"}'
      )),
      null
    ),
    if(
      not(equals(outputs('Get_Opportunity_Record')?['body/_cf_salesrep_value'], null)),
      json(concat(
        '{"participationtypemask": 2, "partyid@odata.bind": "contacts(',
        outputs('Get_Opportunity_Record')?['body/_cf_salesrep_value'],
        ')"}'
      )),
      null
    )
  )
)
```

Each lookup is checked for null and included only when present, producing a clean, variable-length recipient list from CRM data.

Step 4: Null checks per lookup

Missing stakeholders are simply excluded without breaking the flow.

Step 5: Create email activity

The recipient list is passed into Dataverse email activity creation.

Step 6: Email sent

Recipients are resolved dynamically from CRM data at runtime.
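The resolution logic is easier to reason about outside the expression syntax. Below is a Python sketch of the same pattern: walk the four lookups, skip the ones that are null, and emit one activity-party entry per present stakeholder. Field names match the flow; the sample GUID values are placeholders.

```python
def build_recipients(opportunity):
    """Build a Dataverse activity-party list from opportunity lookup values."""
    lookups = [
        ("_ownerid_value", 1, "systemusers"),               # To: record owner
        ("_cf_opportunitycontact_value", 2, "contacts"),    # CC
        ("_cf_forwardingtosalesrep_value", 2, "contacts"),  # CC
        ("_cf_salesrep_value", 2, "contacts"),              # CC
    ]
    parties = []
    for field, mask, entity_set in lookups:
        value = opportunity.get(field)
        if value is not None:  # missing stakeholders are skipped, never errors
            parties.append({
                "participationtypemask": mask,
                "partyid@odata.bind": f"{entity_set}({value})",
            })
    return parties

# Opportunity with an owner and a sales rep, but no contact or forwarder:
sample = {"_ownerid_value": "guid-owner", "_cf_salesrep_value": "guid-rep",
          "_cf_opportunitycontact_value": None}
print(build_recipients(sample))
```

One difference worth noting: this sketch drops null entries from the list outright, whereas the flow expression leaves a null element in the array for each absent lookup, which is why a pre-send check on the resolved recipients (as the FAQ below recommends) is still good practice.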
Business Impact

| Metric | Before | After |
| Recipient source | Hard-coded in flow | Live from CRM opportunity record |
| Personnel change handling | Manual flow edit required | Automatic; a CRM update is sufficient |
| Variable stakeholder support | Not possible | Supported natively |
| Misdirected email risk | High | Eliminated |
| Flow maintenance effort | Per-change developer intervention | None for recipient changes |

The organization now operates email automation where the flow itself never needs to be edited when team structures shift. Updating the opportunity record in CRM is the single source of truth, and the flow responds accordingly at runtime.

Frequently Asked Questions

What if all lookup fields are null on an opportunity?

Because createArray always returns an array, even one whose entries are all null, the coalesce wrapper passes that array through unchanged rather than producing an empty list. It is therefore recommended to add a condition step before the email creation to check that at least one valid recipient exists and to handle the empty case, such as logging a CRM note or notifying an administrator, rather than … Continue reading Stop Hard-Coding Recipients: Streamlining Email Automation with Dataverse and Power Automate for a U.S.-Based Window and Door Manufacturer


Stop Creating Entities: Simplifying CRM with JSON and Custom HTML for a Sustainability Certification Non-Profit in the Netherlands

Summary

A non-profit sustainability certification organization reduced CRM complexity by replacing multiple custom entities with a JSON-based data structure in Microsoft Dynamics 365 CRM. CloudFronts implemented a custom HTML interface to dynamically render input fields and manage document uploads within a single, unified UI. The approach eliminated repeated schema changes, reduced admin overhead, and enabled faster adaptation to evolving certification requirements.

Business impact: Reduced CRM customization overhead, accelerated onboarding of new certification types, and a more maintainable solution that scales without structural rework.

About the Customer

The customer is a non-profit organization focused on sustainability certification across industries. They operate across multiple certification programs, each with distinct documentation requirements, input fields, and approval workflows. Their team relies on Microsoft Dynamics 365 CRM as the central platform for managing certification applications, applicant data, and compliance records.

The Challenge

Microsoft Dynamics 365 CRM is built for structured data — but not all business processes follow fixed structures. The organization managed several certification programs, each requiring different sets of input fields, document uploads, and validation logic. Initially, each new certification type was handled by creating a new custom entity or modifying existing ones to accommodate the required fields. While this worked for a small number of programs, the approach quickly revealed significant limitations:

- Schema rigidity: Every time a new certification type was introduced, or an existing one updated, the CRM schema had to be modified. This meant new fields, new relationships, and repeated deployment cycles.
- Administrative overhead: Each schema change required coordination between developers and CRM administrators, creating delays and dependency bottlenecks.
- Inconsistent UI experience: With different entities handling different certification types, the user interface lacked consistency. Applicants and internal users faced a fragmented experience depending on which program they were working in.
- Scalability ceiling: The entity-per-program model was not designed to scale. Adding a tenth or fifteenth certification type would sharply increase the complexity of the CRM data model.
- Document management friction: Handling document uploads across multiple entities was cumbersome, with no unified approach to tracking submission status or linking files to the correct certification record.

The organization needed a solution that could accommodate evolving certification structures without requiring constant schema modifications or developer intervention.

The Solution

CloudFronts redesigned the data architecture by replacing the multi-entity model with a JSON-based structure stored within Dynamics 365 CRM, paired with a custom HTML interface to dynamically render the appropriate fields and manage document workflows.

Technologies Used

- Microsoft Dynamics 365 CRM: core platform for certification records, applicant data, and workflow management
- JSON: flexible data structure for storing dynamic certification inputs within a single CRM field
- Custom HTML with JavaScript: dynamic front-end interface rendered within the CRM form, replacing static entity-based layouts
- Power Automate: supporting workflows for notifications, approvals, and document status updates

What CloudFronts Configured

Rather than creating a separate entity for each certification type, CloudFronts introduced a single Certification Application entity with a dedicated JSON field to store all variable inputs. A configuration-driven approach was used: each certification type is defined by a schema that specifies which fields to show, what validations to apply, and which documents are required.
The custom HTML interface reads this configuration at runtime and dynamically renders the correct form; no code changes are required when a new certification type is added or an existing one is modified. The same interface handles document uploads, linking each file to its corresponding certification record and tracking submission status in real time. CloudFronts also implemented role-based visibility within the HTML component, ensuring that internal reviewers, applicants, and administrators each see only the sections relevant to their function.

Business Impact

| Metric | Before | After |
| Adding a new certification type | Requires schema changes and deployment | Configuration update only |
| UI consistency | Fragmented across entities | Unified interface for all programs |
| Developer dependency | High; every change needed development effort | Low; administrators manage configurations |
| Document tracking | Manual, per entity | Centralized and automated |
| CRM data model complexity | Growing with each program | Stable and maintainable |

The organization can now onboard new certification programs in a fraction of the time, without touching the underlying CRM schema. Internal teams manage certification configurations independently, and the development team focuses on feature improvements rather than reactive schema maintenance.

Frequently Asked Questions

When should I use JSON instead of CRM entities?

JSON is a strong fit when input structures vary frequently, differ across record types, or are driven by business rules that change regularly. If your data model is stable and relational, entities remain the better choice.

Is it possible to query or filter on JSON data in CRM?

Direct filtering on JSON fields in Dynamics 365 is limited. CloudFronts structured the solution so that key filterable attributes, such as certification type, status, and applicant ID, remain as standard CRM fields, while the variable payload lives in JSON.

Does the custom HTML approach work on mobile?

Yes.
The HTML web resource is built to be responsive and functions within the Dynamics 365 mobile app, though optimal use is on desktop given the complexity of certification forms.

Can this approach support approval workflows?

Yes. Power Automate workflows trigger based on standard CRM field changes, such as status updates, and do not depend on the JSON structure, keeping workflow logic clean and maintainable.

Conclusion

Not every data problem in CRM needs a new entity. When business requirements are variable and evolving, as they often are in certification, compliance, and document-heavy workflows, a rigid entity model can become a liability rather than an asset. By combining JSON-based storage with a dynamic HTML interface, CloudFronts helped this organization build a CRM solution that adapts to change without requiring structural rework. The result is a leaner data model, a more consistent user experience, and a team that can move faster because they are no longer dependent on developer cycles for every process update. Sometimes the best CRM architecture is the one that knows when not to add more to the schema.

We hope you found this article useful. If you would like to explore how AI-powered customer service can improve your support … Continue reading Stop Creating Entities: Simplifying CRM with JSON and Custom HTML for a Sustainability Certification Non-Profit in the Netherlands
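The configuration-driven pattern described in this post can be sketched compactly: each certification type's schema declares its fields and required documents, and one generic routine validates any submission against it. The schema contents below are illustrative, not the client's real configuration.

```python
# Hypothetical per-certification-type configuration; in the real solution
# this would live in the JSON field read by the HTML web resource.
CONFIG = {
    "Material Health": {
        "fields": [
            {"name": "chemical_inventory", "required": True},
            {"name": "supplier_declaration", "required": False},
        ],
        "documents": ["Safety Data Sheet"],
    },
}

def validate_submission(cert_type, inputs, uploaded_docs):
    """Return a list of validation errors for a submission payload."""
    schema = CONFIG[cert_type]
    errors = []
    for field in schema["fields"]:
        if field["required"] and not inputs.get(field["name"]):
            errors.append(f"Missing required field: {field['name']}")
    for doc in schema["documents"]:
        if doc not in uploaded_docs:
            errors.append(f"Missing required document: {doc}")
    return errors

errors = validate_submission("Material Health",
                             {"supplier_declaration": "ACME"}, [])
print(errors)
```

Adding a new certification program means adding one entry to the configuration; the validation and rendering code never changes, which is exactly the property that removed the schema-change bottleneck.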


From Manual to Automated: Scalable Client Statement Reporting with Power BI for a Houston-Based Enterprise Security Services Firm

Summary A services firm based in Houston, Texas, specializing in enterprise security solutions, improved operational efficiency by transitioning from Excel-based reporting to Power BI Paginated Reports, implemented by CloudFronts. CloudFronts designed a structured, client-ready reporting solution integrated with Dynamics 365 CRM. The solution supports manual distribution today while being fully prepared for future automation such as scheduled PDF delivery. Business impact: improved operational efficiency, standardized reporting, and scalability without rework. Client-ready account statement using Power BI Paginated Reports

About the Customer
As a 9x Microsoft Gold Partner and 6x Microsoft Advanced Specialization-endorsed organization based in Texas, U.S., the customer specializes in delivering solutions for critical business needs across systems management, security, data insights, and mobility.

The Challenge
Initially, the organization generated account statements manually using Excel for a small number of clients. While this approach worked at a smaller scale, it presented several limitations:
Manual effort and inefficiency: reports had to be created individually for each client.
Lack of standardization: formatting and structure varied across reports.
Scalability concerns: while effective for a small client base, the process was not designed to scale as the business grows to 30–50+ clients.
Technology decision gap: the team required guidance on choosing between SSRS and Power BI Paginated Reports, along with future automation capabilities.
As a result, the organization needed a solution that addressed current inefficiencies while preparing for future scale.

The Solution
CloudFronts implemented Power BI Paginated Reports, integrated with Dynamics 365 CRM, to create structured, print-ready account statements.
Technologies Used
Dynamics 365 CRM: source of funding, account, and transaction data
Power BI Paginated Reports: designed pixel-perfect, client-facing statements
Power BI Service: enabled hosting and future automation capabilities

What CloudFronts Configured
CloudFronts designed a paginated report tailored for client communication, including account summaries, transaction-level details, and allocation tracking. The solution includes parameterized filtering for month, account, and funding status, enabling efficient report generation across multiple clients. The report was built with a strong emphasis on consistency, print-ready formatting, and reusability, ensuring that reports can be generated without redesign as the business grows. CloudFronts also guided the customer in selecting Power BI Paginated Reports over SSRS to ensure better alignment with the Power BI ecosystem and support for future automation such as subscription-based PDF delivery.

Key Implementation Decisions
Replacing Excel with Paginated Reports: improved standardization and reduced manual effort.
Choosing Paginated Reports over SSRS: enabled seamless integration with Power BI Service and future automation readiness.
Designing for scalability: built a solution that works manually today but supports automation in the future.

Business Impact

Metric | Before | After
Report Creation | Manual Excel-based | System-generated reports
Operational Efficiency | Low | Significantly improved
Scalability | Limited | Ready for growth
Consistency | Variable | Standardized

The organization now operates with a structured reporting system that reduces manual effort while being fully prepared for future automation.

Frequently Asked Questions
Should I use SSRS or Power BI Paginated Reports? If you are using Power BI, Paginated Reports are a better choice due to seamless integration and future automation support.
Can I automate PDF report delivery later? Yes. Paginated Reports support subscription-based delivery for automated PDF emails.
Do I need automation from day one? No. It is more effective to design a scalable solution first and introduce automation as the business grows. Conclusion This implementation highlights that effective reporting is not just about automation—it is about designing for scalability from the beginning. By choosing Power BI Paginated Reports, the organization built a solution that meets current needs while avoiding future rework as they grow. Not every reporting requirement needs a dashboard or immediate automation. A well-designed structured report can often be the most scalable solution. Read the full case study here: Invoke We hope you found this article useful. If you would like to explore how AI-powered customer service can improve your support operations, please contact us at transform@cloudfronts.com. Deepak Chauhan | Consultant, CloudFronts


Understanding the Difference Between Temporary Tables and SourceTableTemporary in Business Central

Summary In Microsoft Dynamics 365 Business Central, performance and data handling are critical, especially when dealing with intermediate calculations, staging data, or processing large datasets. Developers often come across two commonly used approaches: At first glance, both seem to do the same thing: store data temporarily without writing to the database. But in reality, they serve different purposes and behave differently in real-world scenarios. This blog explains: 1] What Temporary Tables are 2] What SourceTableTemporary is 3] Key differences between them 4] When to use which approach 5] Real-world development scenarios

Table of Contents

The Real Problem: Handling Temporary Data Efficiently
Let's take a real development scenario. You are building a customization where: Example Use Cases 1] Generating preview reports 2] Aggregating data before posting 3] Showing calculated insights on a page 4] Temporary staging before validation

The Challenge
If you use normal tables: If you misuse temporary structures: So the key question becomes: Should you use a Temporary Table variable or SourceTableTemporary?

What are Temporary Tables?
Temporary tables are record variables that exist only in memory and are not stored in the SQL database.

Key Characteristics

var
    TempSalesLine: Record "Sales Line" temporary;

Behavior Example

TempSalesLine.Init();
TempSalesLine."Document No." := 'TEMP001';
TempSalesLine.Insert();

This record exists only during runtime and never touches the database.

What is SourceTableTemporary?
SourceTableTemporary is a Page-level property. It makes the entire page operate on a temporary version of its Source Table.

Definition

SourceTableTemporary = true;

Key Characteristics and Behavior Example

trigger OnOpenPage()
begin
    Rec.Init();
    Rec."No." := 'TEMP001';
    Rec.Insert();
end;

Here, Rec is temporary because the page is set to SourceTableTemporary = true.
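As a rough analogy for the distinction above (Python rather than AL, with invented field names), the function below stages computed lines purely in memory, the way a temporary record variable does, while the persisted store is never written:

```python
# A rough Python analogy (not AL) for the temporary-table pattern:
# `database` stands in for a physical table, while the list built
# inside the function behaves like a Record variable declared
# `temporary` — it exists only for the duration of the call.
database = []          # persisted rows (would survive the session)

def stage_preview_lines(source_rows):
    """Build calculation results in memory only, never persisting them."""
    temp = []                                   # in-memory staging area
    for row in source_rows:
        temp.append({"no": row["no"], "amount": row["qty"] * row["price"]})
    return temp                                 # `database` was not touched

rows = [{"no": "TEMP001", "qty": 2, "price": 50.0}]
preview = stage_preview_lines(rows)

print(preview[0]["amount"])   # 100.0
print(len(database))          # 0 -> nothing was persisted
```

The same trade-off the blog describes applies here: in-memory staging is fast and leaves nothing to clean up, but the data is gone once the variable goes out of scope.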
Key Differences

Aspect | Temporary Table | SourceTableTemporary
Scope | Variable-level | Page-level
Usage | Backend logic | UI Pages
Data Lifetime | Until variable is cleared | Until page is closed
Control | Full AL control | Page-driven
UI Binding | Not directly bound to UI | Directly bound to UI
Use Case | Processing, calculations | Displaying temporary data

Practical Scenarios

Scenario 1: Data Processing Logic
You are calculating totals before posting a document. Use Temporary Tables. Why?

Scenario 2: Showing Preview Data on a Page
You want to show: Use SourceTableTemporary. Why?

Scenario 3: Hybrid Use Case
Sometimes you: Best Practice:

Why Choosing the Right Approach Matters
Using the wrong approach can lead to:

Problem | Cause
Data not visible on UI | Using only temporary variables
Performance issues | Writing unnecessary records
Complex cleanup logic | Using physical tables instead of temporary
UI inconsistency | Misusing SourceTableTemporary

Business Impact
1. Improved Performance: temporary data handling reduces database load and improves execution speed.
2. Cleaner Data Architecture: no unnecessary records stored → no cleanup jobs required.
3. Better User Experience: users can preview and interact with data without affecting actual records.
4. Safer Development Practices: avoids accidental data writes and improves system stability.
5. Flexible Customizations: developers can build simulation, preview, and staging features easily.
6. Reduced Maintenance Effort: no need for background jobs to delete temporary records.

Final Thoughts
Both Temporary Tables and SourceTableTemporary are powerful tools, but they are not interchangeable. Think of it like this: Choosing the right one depends on where your logic lives: I hope you found this blog useful! "Discover How We've Enabled Businesses Like Yours – Explore Our Client Testimonials!" Please feel free to connect with us at transform@cloudfronts.com


Optimizing Power BI Dataset Performance Using Incremental Refresh for Large-Scale Analytics

Posted On April 10, 2026 by Siddhesh Pal

Summary Use Case / Why This Matters

Prerequisites
Before implementing incremental refresh in Microsoft Power BI, ensure the following:

Step-by-Step Implementation

Step 1: Create Parameters (RangeStart & RangeEnd)
This step defines the data boundaries for incremental refresh. These parameters will control which data gets refreshed.

Step 2: Apply Filter in Power Query
This step filters the dataset using the parameters. Select your date column and apply the filter: DateColumn >= RangeStart AND DateColumn < RangeEnd. This ensures only relevant data is processed.

Step 3: Enable Query Folding
This step ensures filtering happens at the data source level. Right-click the last applied step and choose View Native Query; if the option is available, query folding is enabled. Query folding is critical for performance optimization.

Step 4: Configure Incremental Refresh Policy
This step defines how much data to store and refresh. This creates partitions in the dataset.

Step 5: Publish to Power BI Service
This step activates incremental refresh in the cloud. After publishing, Power BI automatically manages partitions.

Business Impact
Following the implementation, organizations achieved the following results:

Metric | Before | After
Dataset refresh time | 2–3 hours (full refresh) | 30–45 minutes
Data processing load | Entire dataset processed | Only recent data processed
Report performance | Slow with large datasets | Faster load & interaction
System resource usage | High | Optimized and controlled

Incremental refresh significantly improves scalability and ensures consistent performance for enterprise reporting. To conclude, incremental refresh in Microsoft Power BI transforms how organizations handle large datasets by reducing refresh times and improving performance. By implementing proper data filtering, query folding, and refresh policies, businesses can scale their analytics without compromising speed.
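The Step 2 filter can be sketched outside Power BI as well. The following Python stand-in (column name, dates, and refresh window are all hypothetical, and the real filter is written in Power Query M) mimics how only rows inside the [RangeStart, RangeEnd) window are reprocessed:

```python
from datetime import date

# Hypothetical dataset: one row per transaction with a date column.
rows = [
    {"date": date(2023, 1, 15), "amount": 100},
    {"date": date(2025, 11, 3), "amount": 250},
    {"date": date(2026, 4, 1),  "amount": 75},
]

def incremental_slice(rows, range_start, range_end):
    """Mimic the RangeStart/RangeEnd filter: keep only rows whose
    date falls in [range_start, range_end) — inclusive start,
    exclusive end, so adjacent windows never overlap."""
    return [r for r in rows if range_start <= r["date"] < range_end]

# Only the most recent window is reprocessed; older partitions stay as-is.
recent = incremental_slice(rows, date(2026, 1, 1), date(2026, 5, 1))
print(len(recent))  # 1 -> only the April 2026 row falls in the window
```

The half-open interval is the important detail: using >= on one boundary and < on the other prevents rows on a partition edge from being refreshed twice.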
As data volumes continue to grow, adopting incremental refresh is no longer optional—it is essential for efficient and cost-effective reporting. If your Power BI reports are slowing down due to large datasets, start implementing Incremental Refresh today. Begin by identifying your date columns, defining parameters, and configuring refresh policies. A small change can lead to massive performance improvements in your reporting environment. We hope you found this blog useful. If you would like to learn more or discuss similar solutions, feel free to reach out to us at transform@cloudfronts.com.


Understanding VertiPaq Engine Internals for Better Power BI Performance Optimization

Posted On April 9, 2026 by Siddhesh Pal

Summary Prerequisites
Before diving into VertiPaq optimization, ensure you have:

Step-by-Step Understanding of VertiPaq Internals

Step 1: Columnar Storage Architecture
VertiPaq stores data in a columnar format instead of rows, enabling faster scanning and better compression. Impact: reduces query execution time significantly.

Step 2: Data Compression Techniques
VertiPaq applies advanced compression techniques. Impact: reduces memory footprint and improves performance.

Step 3: Segmentation and Partitions
VertiPaq divides data into segments for efficient processing. Impact: faster query execution and scalability.

Step 4: Cardinality Optimization
Cardinality refers to the number of unique values in a column. Best Practices:

Step 5: Relationship and Model Design
Efficient relationships improve VertiPaq performance. Impact: reduces query complexity and improves performance.

Business Impact
Following optimization based on VertiPaq principles, organizations achieved:

Metric | Before | After
Report load time | 15–20 seconds | 5–8 seconds
Dataset size | 1.5 GB | 600 MB
Query performance | Slow with complex models | Optimized and responsive
User experience | Lagging dashboards | Smooth interaction

To conclude, understanding the VertiPaq engine in Microsoft Power BI is key to unlocking high-performance analytics. By optimizing data models with proper structure, compression techniques, and relationships, organizations can achieve faster insights and scalable reporting. As datasets grow in size and complexity, mastering VertiPaq internals becomes essential for every Power BI developer and data professional. If you want to build high-performance Power BI reports, start by analyzing your data model and optimizing it based on VertiPaq principles. A small improvement in data structure can lead to massive gains in performance. We hope you found this blog useful. If you would like to learn more or discuss similar solutions, feel free to reach out to us at transform@cloudfronts.com.
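To make the compression ideas in Step 2 concrete, here is a small Python sketch of dictionary encoding followed by run-length encoding, two techniques columnar engines like VertiPaq rely on. It is illustrative only, not VertiPaq's actual implementation, and the column values are invented:

```python
def dictionary_encode(column):
    """Replace repeated values with small integer ids; the lower the
    cardinality, the smaller the dictionary and the better the savings."""
    dictionary, ids = {}, []
    for value in column:
        if value not in dictionary:
            dictionary[value] = len(dictionary)
        ids.append(dictionary[value])
    return dictionary, ids

def run_length_encode(ids):
    """Collapse consecutive runs of the same id into [id, run_length] pairs."""
    runs = []
    for i in ids:
        if runs and runs[-1][0] == i:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([i, 1])       # start a new run
    return runs

# A low-cardinality, sorted-ish column compresses extremely well:
column = ["NL", "NL", "NL", "US", "US", "NL"]
dictionary, ids = dictionary_encode(column)
print(dictionary)              # {'NL': 0, 'US': 1}
print(run_length_encode(ids))  # [[0, 3], [1, 2], [0, 1]]
```

This is also why the cardinality advice in Step 4 matters: a high-cardinality column (for example, a timestamp with seconds) produces a large dictionary and short runs, so both encodings lose most of their benefit.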


From Pipeline to Payment: Designing a Sales Performance Dashboard

Posted On April 8, 2026 by Deepak Chauhan

Summary Many organizations track sales performance using pipeline and won revenue dashboards. However, these views often stop short of showing how much revenue is actually realized. For a services firm based in Houston, Texas, specializing in digital transformation and enterprise security solutions, this gap created challenges in understanding real business performance and tracking commissions accurately. This article explains how a connected sales dashboard was designed to bring together pipeline, contracts, and invoicing—providing a complete view from deal to realized revenue. Sales Performance Dashboard showing pipeline to revenue flow Table of Contents 1. Why This Gap Exists 2. Limitation of Traditional Sales Dashboards 3. From Pipeline to Payment 4. Designing the Dashboard 5. The Value of a Unified View 6. The Outcome Why This Gap Exists In many organizations, all sales-related data exists within Dynamics 365 CRM, including opportunities, contracts, order lines, and invoices. However, reporting is often built in stages based on different business needs. Sales teams focus on opportunities and closed deals, while finance teams rely on contract, billing, and invoice data. Over time, separate reports are created for each purpose. While each report works well independently, they are not always connected in a single flow. As a result, answering simple business questions becomes difficult, such as how much of the won revenue is invoiced, which deals are generating actual revenue, and whether commissions are aligned with realized value. Limitation of Traditional Sales Dashboards Most sales dashboards focus on metrics such as won revenue, win rate, deal size, and pipeline value. These provide a good view of sales activity but do not fully reflect business outcomes. A deal marked as won may still be pending contract execution, split across multiple order lines, or not yet invoiced. This creates a disconnect between reported performance and actual revenue realization. 
As a result, leadership sees growth in numbers, but lacks clarity on how much value has truly been earned. From Pipeline to Payment To address this, the dashboard needs to follow the complete lifecycle of a deal, from opportunity to realized revenue. Opportunity leads to Total Contract Value (TCV), which flows into contracts, then to order lines, followed by invoices, and finally results in realized revenue. Each stage provides a different perspective, ensuring that reporting captures not just intent, but actual business impact. Designing the Dashboard The dashboard was designed in layers to keep it simple while ensuring full visibility across the revenue lifecycle. The first layer provides a snapshot of sales performance, including won revenue, win rate, deal size, deal age, and lost revenue. Supporting visuals such as revenue trends, industry distribution, and geographic spread help leadership understand overall performance and where the business is coming from. The next layer focuses on what drives revenue. By breaking down data across solution areas, industries, regions, and account managers, the dashboard highlights which segments contribute the most and where future efforts should be focused. Once deals are won, contract-level visibility provides clarity on how revenue is structured. It highlights contract types, classifications, and overall value, helping teams understand how revenue will flow from a billing perspective. The dashboard then moves into order line and profitability insights. This layer connects revenue with estimated cost, margin, and profit contribution, allowing the business to evaluate the quality of deals rather than just their size. Finally, invoice-level visibility completes the picture by showing billed amounts, invoice status, and realized revenue. This ensures that the dashboard reflects actual business performance rather than just sales activity. 
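The pipeline-to-payment gap described above can be illustrated with a small Python sketch (all records, field names, and amounts are invented): won revenue sums opportunity values, while realized revenue counts only paid invoices, and the two can differ sharply:

```python
# Hypothetical simplified records from the stages described above:
# opportunities feed contracts, which are billed through invoices.
opportunities = [
    {"id": "OPP-1", "won_value": 120_000},
    {"id": "OPP-2", "won_value": 80_000},
]
invoices = [
    {"opportunity_id": "OPP-1", "amount": 40_000, "status": "Paid"},
    {"opportunity_id": "OPP-1", "amount": 20_000, "status": "Pending"},
]

def realized_revenue(opp_id):
    """Realized revenue = sum of *paid* invoices for the deal, which
    can lag far behind the won value a sales dashboard reports."""
    return sum(i["amount"] for i in invoices
               if i["opportunity_id"] == opp_id and i["status"] == "Paid")

won = sum(o["won_value"] for o in opportunities)
realized = sum(realized_revenue(o["id"]) for o in opportunities)
print(won, realized)  # 200000 40000 -> "won" overstates earned value
```

Here OPP-2 has no invoices at all and OPP-1 is only partially paid, which is exactly the disconnect the unified dashboard is designed to surface.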
The Value of a Unified View By bringing all these elements together, the organization moved from fragmented reporting to a single, connected view of sales and revenue. This was enabled by combining data across opportunities, contracts, order lines, and invoices into a unified reporting model. The result is improved visibility, better alignment between teams, and more reliable decision-making. The Outcome 1. Clear visibility from pipeline to realized revenue 2. Improved alignment between sales and finance teams 3. Better tracking of commissions based on actual performance 4. Reduced manual effort in reconciling multiple reports We hope you found this blog useful. If you would like to learn more or discuss similar solutions, feel free to reach out to us at transform@cloudfronts.com.

